Left to Right: Larsson Johnson, Michael Habisohn, Amirhossein Taghvaei, Meridan Markowitz

Course 1 Best Time: 61.25 sec

Course 2 Best Time: 90.25 sec

PROJECT OVERVIEW

Our goal by the end of SE 423 was to autonomously navigate a randomly arranged course consisting of five 2x2 boxes (obstacles) and five pink or blue “weeds”: circular paper cutouts placed randomly on the course. We were required to navigate to five waypoints, all the while searching for weeds to kill. A weed is killed/eliminated by sitting still directly over it for at least 1 second. The robot’s sensors included an onboard compass, a gyro, a LADAR, and OptiTrack, a camera-based vision system.

...

Figure 2


PATH PLANNING

For autonomous maze navigation our group used the A-star algorithm. The algorithm was fed a map of the currently known obstacle locations and returned a path for the robot to take, broken into multiple steps. At each step, a weight was calculated for each neighboring node and the most favorable node was chosen. Each step moved the robot to a particular neighboring node, and once all steps had been successfully executed the robot arrived at the desired destination.
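The neighbor-weighting idea above can be sketched as a generic 4-connected A-star on an occupancy grid. This is an illustrative sketch with a uniform move cost and Manhattan heuristic; the project's actual node weights, grid size, and data structures are not reproduced here.

```python
import heapq
import itertools

def a_star(grid, start, goal):
    """Plan a path on a 2D occupancy grid (0 = free, 1 = obstacle).

    Returns a list of (row, col) cells from start to goal, or None if
    no path exists.
    """
    rows, cols = len(grid), len(grid[0])

    def h(node):
        # Manhattan-distance heuristic: admissible on a 4-connected grid
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

    tie = itertools.count()                  # tie-breaker for equal f-costs
    open_set = [(h(start), 0, next(tie), start, None)]
    came_from = {}
    best_g = {start: 0}
    while open_set:
        _, g, _, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue                         # already expanded via a cheaper route
        came_from[node] = parent
        if node == goal:
            path = []                        # walk parent links back to start
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1                   # uniform cost per move
                if ng < best_g.get(step, float("inf")):
                    best_g[step] = ng
                    heapq.heappush(open_set, (ng + h(step), ng, next(tie), step, node))
    return None
```

Because the heuristic never overestimates the true remaining cost, the first time the goal is popped from the priority queue the recovered path is shortest.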

It was critical that our robot continued detecting obstacles during path navigation; whenever it detected a new obstacle, our master map was updated. The updated map was then fed to A-star, and a new path was planned that accounted for the new information.
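The sense-update-replan loop can be sketched as follows. The function names `plan`, `sense`, and `move` are illustrative stand-ins for the project's actual routines, not its real code: `plan(grid, a, b)` is any planner returning a cell path (such as A-star), `sense(pos)` returns newly detected obstacle cells, and `move(a, b)` executes one step.

```python
def navigate_with_replanning(grid, start, goal, sense, move, plan):
    """Drive toward `goal`, replanning whenever a new obstacle is seen."""
    pos = start
    path = plan(grid, pos, goal)
    while pos != goal:
        if not path:
            return None                      # no route with the current map
        new_obstacles = sense(pos)
        if new_obstacles:
            for r, c in new_obstacles:
                grid[r][c] = 1               # update the master map
            path = plan(grid, pos, goal)     # replan with new information
            continue
        nxt = path[1]                        # path[0] is the current cell
        move(pos, nxt)
        pos = nxt
        path = path[1:]
    return pos
```

Replanning before moving, rather than after, keeps the robot from stepping into a cell the latest scan has just marked as blocked.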


TARGET DETECTION

To detect the obstacles, we employed a 240-degree LADAR system. The robot computed where each detected obstacle was, both relative to the robot and in the A-star coordinate system. We then rounded the position to the nearest integer grid point, marked that point as a boundary in the A-star algorithm, and recalculated our path. These values were also sent up to LabVIEW to display an accurate representation of the map obstacles.
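Converting a range reading to an integer A-star cell amounts to a polar-to-Cartesian transform in the robot frame, a shift into world coordinates, and a round. A minimal sketch, assuming a pose of (x, y, heading) in world units and a beam angle measured relative to the heading; the parameter names and the 1-unit cell size are assumptions, not the project's actual conventions.

```python
import math

def ladar_to_cell(robot_x, robot_y, robot_theta, beam_angle, beam_dist, cell_size=1.0):
    """Convert one LADAR range reading to an integer grid cell.

    The robot pose is (robot_x, robot_y, robot_theta) in world/A-star
    coordinates; beam_angle is the beam's angle relative to the heading.
    """
    # World-frame position of the detected point
    ang = robot_theta + beam_angle
    obs_x = robot_x + beam_dist * math.cos(ang)
    obs_y = robot_y + beam_dist * math.sin(ang)
    # Round to the nearest grid index before marking the A-star map
    return (round(obs_x / cell_size), round(obs_y / cell_size))
```

The rounded cell can then be marked as a boundary in the planner's map and mirrored to the display.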

To detect the weeds we used a similar approach: we calculated the distance of the colored dots in the camera’s view via an equation we had determined in a lab session. When the robot saw a pink or blue object, it turned toward it, calculated the object’s distance, assigned it an A-star coordinate, and checked whether it had already killed a weed at that location. If it had not, it path-planned to that weed’s location. Once the weed was killed, the robot resumed its waypoint path planning.
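The decision logic for a sighted weed is essentially set-based deduplication on A-star cells. A minimal sketch: `killed_weeds` is the set of cells where a weed was already eliminated, and `plan_to` stands in for the path planner; all names are illustrative, not the project's actual bookkeeping.

```python
def handle_weed_sighting(cell, killed_weeds, plan_to):
    """Decide whether a newly sighted weed warrants a detour.

    Returns True if the robot diverts to the weed, False if it was
    already killed at that cell.
    """
    if cell in killed_weeds:
        return False              # already killed here: stay on waypoints
    plan_to(cell)                 # divert to the weed's location
    killed_weeds.add(cell)        # record the kill (sit over it >= 1 s)
    return True
```

Keying the set on rounded A-star cells rather than raw camera distances keeps small measurement noise from making the same weed look new on each sighting.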


MAZE REPORT

The final task of our robot, once it had successfully navigated to all five required waypoints, was to exit the maze and enter two circular report locations; as can be seen in Figure 1, these locations were either blue or pink. Once in a location of a particular color, the robot used servos to report how many weeds of that color it had seen. While navigating the maze, the robot also reported the locations of weeds and obstacles back to a desktop LabVIEW application.


Videos

Course 1, Run 1

Course 1, Run 2

Course 2, Run 1

Course 2, Run 2